

Like babies and dancers, this robot learns from studying itself

Popular Science

Researchers from Columbia University have successfully developed an autonomous robot arm capable of learning new motions and adapting to damage simply by watching itself move. The robot observed a video of itself and then used that data to plan its next actions--a practice the researchers refer to as "kinematic self-awareness." This unique learning process is designed to mimic the way humans adjust certain movements by watching themselves in a mirror. Teaching robots to learn this way could reduce the need for extensive training in bespoke 3D simulations. It could also one day make future autonomous robots operating in the real world better equipped to adapt to damage and environmental changes without constant human intervention.


A Revolution in How Robots Learn

The New Yorker

A disproportionate amount of the primary motor cortex, a region of the brain that controls movement, is devoted to body parts that move in more complicated ways. An especially large portion controls the face and lips; a similarly large portion controls the hands. A human hand is capable of moving in twenty-seven separate ways, more by far than any other body part: our wrists rotate, our knuckles move independently of one another, our fingers can spread or contract. The sensors in the skin of the hand are among the densest in the body, and are part of a network of nerves that run along the spinal cord. "People think of the spinal column as just wires," Arthur Petron, a roboticist who earned his Ph.D. in biomechatronics at M.I.T., said.


A way to let robots learn by listening will make them more useful

MIT Technology Review

Researchers at the Robotics and Embodied AI Lab at Stanford University set out to change that. They first built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone designed to filter out background noise. Human demonstrators used the gripper for a variety of household tasks, and the team then used this data to teach robotic arms how to execute those tasks on their own. The team's new training algorithms help robots gather clues from audio signals to perform more effectively. "Thus far, robots have been training on videos that are muted," says Zeyi Liu, a PhD student at Stanford and lead author of the study.


Toyota's Robots Are Learning to Do Housework--By Copying Humans

WIRED

As someone who quite enjoys the Zen of tidying up, I was only too happy to grab a dustpan and brush and sweep up some beans spilled on a tabletop while visiting the Toyota Research Lab in Cambridge, Massachusetts, last year. The chore was more challenging than usual because I had to do it using a teleoperated pair of robotic arms with two-fingered pincers for hands. As I sat before the table, using a pair of controllers like bike handles with extra buttons and levers, I could feel the sensation of grabbing solid items, and also sense their heft as I lifted them, but it still took some getting used to. After several minutes of tidying, I continued my tour of the lab and forgot about my brief stint as a teacher of robots. A few days later, Toyota sent me a video of the robot I'd operated sweeping up a similar mess on its own, using what it had learned from my demonstrations combined with a few more demos and several more hours of practice sweeping inside a simulated world.


Helping robots learn from each other

#artificialintelligence

Just like a Transformer-based language model predicts the next word based on trends and patterns it sees in text, RT-1 has been trained on robotic perception data – images – and corresponding actions so it can identify the next most likely behavior a robot should take. For example, if the model is trained on many examples of how a robot should pick up a banana, the robot learns to identify and pick up a banana even when the banana is seen in a new kitchen surrounded by objects it's never seen before. This approach enables the robot to generalize what it's learned to new tasks, handling new objects and environments based on experiences in its training data -- a rare feat for robots, which are typically strictly coded for narrow tasks.
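The "predict the next action like the next word" framing above can be sketched with a toy stand-in. To be clear, this is not RT-1's Transformer architecture, and every name here is invented for illustration: it is just a frequency table mapping observations to actions, exposing the same interface a learned policy would (train on observation-action pairs, then predict an action for a new observation).

```python
from collections import Counter, defaultdict

# Toy stand-in for the idea: learn a mapping from a perceived state to
# the most likely next action, from (observation, action) demonstration
# pairs. A real model like RT-1 uses a Transformer over image tokens;
# this frequency table only illustrates the conditioning interface.

class NextActionModel:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, demonstrations):
        """demonstrations: iterable of (observation, action) pairs."""
        for obs, action in demonstrations:
            self.counts[obs][action] += 1

    def predict(self, obs):
        """Return the most frequently demonstrated action for obs."""
        if obs not in self.counts:
            return None
        return self.counts[obs].most_common(1)[0][0]

demos = [
    ("banana on counter", "grasp banana"),
    ("banana on counter", "grasp banana"),
    ("cup on table", "grasp cup"),
]
model = NextActionModel()
model.train(demos)
print(model.predict("banana on counter"))  # grasp banana
```

The generalization RT-1 achieves comes from the learned visual representation, which a lookup table like this cannot provide; the sketch only shows the prediction interface.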


How will robots learn? Will there be a Tesla Bot Skill Marketplace? - techAU

#artificialintelligence

When Elon unveiled the Tesla Bot, he said a humanoid robot will be useful if it can navigate through the world without being explicitly trained -- without explicit line-by-line instructions. Can you talk to it with phrases like "please pick up that bolt and attach it to the car with that wrench"? It should be able to do that. He went on to say that it should be able to understand "please go to the store and get me the following groceries," adding, "I think we can do that." This challenge is immense, and I think it's fun to break down what exactly is required to enable something like that. Firstly, there's voice recognition to start.


How robots learn to hike (w/video)

#artificialintelligence

To navigate difficult terrain, humans and animals quite automatically combine the visual perception of their environment with the proprioception of their legs and hands. This allows them to easily handle slippery or soft ground and move around with confidence, even when visibility is low.


How Robots Learn To Hike - AI Summary

#artificialintelligence

ETH Zurich researchers led by Marco Hutter developed a new control approach that enables a legged robot, called ANYmal, to move quickly and robustly over difficult terrain. Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-meter-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. Terrain like this is hard for robots to perceive reliably. "The reason is that the information about the immediate environment recorded by laser sensors and cameras is often incomplete and ambiguous," explains Takahiro Miki, a doctoral student in Hutter's group and lead author of the study. Before the robot could put its capabilities to the test in the real world, the scientists exposed the system to numerous obstacles and sources of error in a virtual training camp. The ETH Zurich robot then automatically and quickly overcame numerous obstacles and difficult terrain while autonomously exploring an underground system of narrow tunnels, caves, and urban infrastructure.


How robots learn to hike

#artificialintelligence

Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-metre-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. But ANYmal, the quadrupedal robot from the Robotic Systems Lab at ETH Zurich, overcomes the 120 vertical metres effortlessly in a 31-minute hike. That's 4 minutes faster than the estimated duration for human hikers -- and with no falls or missteps. This is made possible by a new control technology, which researchers at ETH Zurich led by robotics professor Marco Hutter recently presented in the journal Science Robotics. "The robot has learned to combine visual perception of its environment with proprioception -- its sense of touch -- based on direct leg contact. This allows it to tackle rough terrain faster, more efficiently and, above all, more robustly," Hutter says.
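The fusion Hutter describes, trusting vision when it is reliable and falling back on leg contact when it is not, can be caricatured in a few lines. ANYmal's actual controller is a learned neural-network policy; the function and numbers below are hypothetical and only illustrate the confidence-weighted blending idea.

```python
# Hypothetical sketch of confidence-weighted sensor fusion: blend an
# exteroceptive terrain-height estimate (cameras/lidar) with a
# proprioceptive one (where the foot actually touched down), weighting
# the visual estimate by a confidence score in [0, 1].

def fuse_height(visual_height, contact_height, visual_confidence):
    """Weighted blend: trust vision when confident, touch otherwise."""
    if not 0.0 <= visual_confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return (visual_confidence * visual_height
            + (1.0 - visual_confidence) * contact_height)

# Snow hides the true ground: vision reports 0.30 m with low
# confidence, but the foot made contact at 0.12 m, so the fused
# estimate stays close to the proprioceptive reading.
print(fuse_height(0.30, 0.12, 0.2))
```

In the real system this trade-off is not a hand-written formula but something the policy learns during simulated training, which is what lets it cope with deep snow or dense vegetation where cameras mislead.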

